High-Precision Localization Using Ground Texture
Location-aware applications play an increasingly critical role in everyday
life. However, satellite-based localization (e.g., GPS) has limited accuracy
and can be unusable in dense urban areas and indoors. We introduce an
image-based global localization system that is accurate to a few millimeters
and performs reliable localization both indoors and outdoors. The key idea is to
capture and index distinctive local keypoints in ground textures. This is based
on the observation that ground textures including wood, carpet, tile, concrete,
and asphalt may look random and homogeneous, but all contain cracks, scratches,
or unique arrangements of fibers. These imperfections are persistent, and can
serve as local features. Our system incorporates a downward-facing camera to
capture the fine texture of the ground, together with an image processing
pipeline that locates the captured texture patch in a compact database
constructed offline. We demonstrate the capability of our system to robustly,
accurately, and quickly locate test images on various types of outdoor and
indoor ground surfaces.
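As a rough illustration of the indexing idea, the sketch below matches a query patch against an offline descriptor database using OpenCV ORB features and brute-force Hamming matching. The abstract does not specify the system's actual detector, descriptors, database structure, or verification step, so every concrete choice here is an assumption.

import cv2

orb = cv2.ORB_create(nfeatures=1000)

def index_patch(image_gray):
    """Extract keypoints and descriptors from one ground-texture patch."""
    keypoints, descriptors = orb.detectAndCompute(image_gray, None)
    return keypoints, descriptors

def locate(query_gray, database):
    """Match a query patch against an offline database of (id, descriptors)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, q_desc = index_patch(query_gray)
    best_id, best_score = None, 0
    for patch_id, db_desc in database:
        matches = matcher.match(q_desc, db_desc)
        # Count strong matches; a real system would also verify geometry
        # (e.g., a RANSAC homography) before accepting a location estimate.
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_id, best_score = patch_id, score
    return best_id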
Computational Light Routing: 3D Printed Optical Fibers for Sensing and Display
Despite recent interest in digital fabrication, there are still few algorithms that provide control over how light propagates inside a solid object. Existing methods either work only on the surface or restrict themselves to light diffusion in volumes. We use multi-material 3D printing to fabricate objects with embedded optical fibers, exploiting total internal reflection to guide light inside an object. We introduce automatic fiber design algorithms together with new manufacturing techniques to route light between two arbitrary surfaces. Our implicit algorithm optimizes light transmission by minimizing fiber curvature and maximizing fiber separation while respecting constraints such as fiber arrival angle. We also discuss the influence of different printable materials and fiber geometry on light propagation in the volume and the light angular distribution when exiting the fiber. Our methods enable new applications such as surface displays of arbitrary shape, touch-based painting of surfaces, and sensing a hemispherical light distribution in a single shot.
Funding: National Science Foundation (U.S.) Grants CCF-1012147 and IIS-1116296; United States Defense Advanced Research Projects Agency Grant N66001-12-1-4242; Intel Corporation (Science and Technology Center for Visual Computing); Alfred P. Sloan Foundation (Sloan Research Fellowship).
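A toy sketch of this style of routing objective, assuming each fiber is a polyline with fixed endpoints: curvature is penalized along each fiber, and pairs of fibers are repelled when closer than a minimum separation. The weights and the repulsion term are illustrative; the paper's implicit formulation and its manufacturability constraints (such as arrival angle) are richer.

import numpy as np

def curvature_energy(fiber):  # fiber: (n, 3) polyline vertices
    second_diff = fiber[:-2] - 2.0 * fiber[1:-1] + fiber[2:]
    return np.sum(second_diff ** 2)

def separation_penalty(fibers, min_dist=1.0):
    penalty = 0.0
    for i in range(len(fibers)):
        for j in range(i + 1, len(fibers)):
            # Pairwise vertex distances between two fibers.
            d = np.linalg.norm(fibers[i][:, None] - fibers[j][None, :], axis=-1)
            penalty += np.sum(np.maximum(min_dist - d, 0.0) ** 2)
    return penalty

def routing_energy(fibers, w_sep=10.0):
    """Total energy to minimize (endpoints held fixed during descent)."""
    return sum(curvature_energy(f) for f in fibers) + w_sep * separation_penalty(fibers)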
Constructing Printable Surfaces with View-Dependent Appearance
We present a method for the digital fabrication of surfaces whose appearance
varies based on viewing direction. The surfaces are constructed from a mesh of
bars arranged in a self-occluding colored heightfield that creates the desired
view-dependent effects. At the heart of our method is a novel and simple
differentiable rendering algorithm specifically designed to render colored 3D
heightfields and enable efficient calculation of the gradient of appearance
with respect to heights and colors. This algorithm forms the basis of a
coarse-to-fine ML-based optimization process that adjusts the heights and
colors of the bars to minimize the loss between the desired and rendered surface
appearance from each viewpoint, deriving meshes that can then be fabricated
using a 3D printer. Using our method, we demonstrate both synthetic and
real-world fabricated results with view-dependent appearance.
Comment: 10 pages, 16 figures.
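A heavily reduced illustration of the optimization loop: a 1D row of colored bars is rendered from the left and from the right with soft visibility, so gradients flow back to the heights and colors. This toy stands in for the paper's colored-3D-heightfield renderer; the soft-visibility trick, sharpness constant, and targets here are all assumptions.

import torch

n = 32
heights = torch.rand(n, requires_grad=True)
colors = torch.rand(n, requires_grad=True)  # one scalar "color" per bar

def render(h, c, from_left=True, sharpness=20.0):
    if not from_left:
        h, c = h.flip(0), c.flip(0)
    # A bar is (softly) visible where it rises above everything in front of it.
    prev = torch.nn.functional.pad(h[:-1], (1, 0), value=-1.0)
    prev_max = torch.cummax(prev, dim=0).values
    visibility = torch.sigmoid(sharpness * (h - prev_max))
    image = visibility * c
    return image if from_left else image.flip(0)

target_left = torch.zeros(n)   # desired appearance seen from the left
target_right = torch.ones(n)   # desired appearance seen from the right

optimizer = torch.optim.Adam([heights, colors], lr=0.05)
for step in range(500):
    loss = ((render(heights, colors, True) - target_left) ** 2).mean() + \
           ((render(heights, colors, False) - target_right) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()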
Clutter Detection and Removal in 3D Scenes with View-Consistent Inpainting
Removing clutter from scenes is essential in many applications, ranging from
privacy-concerned content filtering to data augmentation. In this work, we
present an automatic system that removes clutter from 3D scenes and inpaints
with coherent geometry and texture. We propose techniques for its two key
components: 3D segmentation from shared properties and 3D inpainting, both of
which are important problems. The definition of 3D scene clutter
(frequently-moving objects) is not well captured by commonly-studied object
categories in computer vision. To tackle the lack of well-defined clutter
annotations, we group noisy fine-grained labels, leverage virtual rendering,
and impose an instance-level area-sensitive loss. Once clutter is removed, we
inpaint geometry and texture in the resulting holes by merging inpainted RGB-D
images. This requires novel voting and pruning strategies that guarantee
multi-view consistency across individually inpainted images for mesh
reconstruction. Experiments on the ScanNet and Matterport datasets show that our method outperforms baselines for clutter segmentation and 3D inpainting, both visually and quantitatively.
Comment: 18 pages. ICCV 2023. Project page: https://weify627.github.io/clutter
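The voting step might look roughly like the sketch below: an inpainted depth sample is kept only if enough other views predict a consistent depth at its reprojection. The pinhole View class and the thresholds are illustrative stand-ins, not the paper's implementation.

import numpy as np

class View:
    """Minimal pinhole view holding an inpainted depth map (illustrative)."""
    def __init__(self, K, world_to_cam, inpainted_depth):
        self.K = K                              # 3x3 intrinsics
        self.world_to_cam = world_to_cam        # 4x4 extrinsics
        self.inpainted_depth = inpainted_depth  # (H, W) depths in metres

    def project(self, p_world):
        p_cam = self.world_to_cam[:3, :3] @ p_world + self.world_to_cam[:3, 3]
        uvw = self.K @ p_cam
        return uvw[0] / uvw[2], uvw[1] / uvw[2], p_cam[2]

def votes_for(p_world, views, tol=0.02):
    """Count views whose inpainted depth agrees with p_world within tol metres."""
    votes = 0
    for view in views:
        u, v, depth = view.project(p_world)
        height, width = view.inpainted_depth.shape
        if depth <= 0 or not (0 <= u < width and 0 <= v < height):
            continue
        if abs(view.inpainted_depth[int(v), int(u)] - depth) < tol:
            votes += 1
    return votes

def prune(points, views, min_votes=3):
    """Keep only samples that enough views agree on before mesh reconstruction."""
    return [p for p in points if votes_for(p, views) >= min_votes]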
Chopper: Partitioning Models into 3D-Printable Parts
3D printing technology is rapidly maturing and becoming ubiquitous. One of the remaining obstacles to wide-scale adoption is that the object to be printed must fit into the working volume of the 3D printer. We propose a framework, called Chopper, to decompose a large 3D object into smaller parts so that each part fits into the printing volume. These parts can then be assembled to form the original object. We formulate a number of desirable criteria for the partition, including assemblability, a small number of components, unobtrusive seams, and structural soundness. Chopper optimizes these criteria and generates a partition either automatically or with user guidance. Our prototype outputs the final decomposed parts with customized connectors on the interfaces. We demonstrate the effectiveness of Chopper on a variety of non-trivial real-world objects.
Funding: National Science Foundation (U.S.) Grants CCF-1012147 and IIS-1116296; Intel Corporation (Science and Technology Center for Visual Computing).
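One of the hard feasibility constraints, that each part fit the build volume, can be checked on bounding boxes as in this small sketch; the printer dimensions are placeholder values, and the full system additionally scores candidate cuts on the other criteria above.

import itertools

def fits_printer(part_extents, build_volume=(200.0, 200.0, 180.0)):
    """True if the part's bounding box fits the build volume (millimetres)
    under some axis-aligned orientation. Dimensions are placeholders."""
    return any(all(e <= v for e, v in zip(perm, build_volume))
               for perm in itertools.permutations(part_extents))

print(fits_printer((250.0, 150.0, 150.0)))  # False: too long in every orientation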
Hand Pose Estimation with MEMS-Ultrasonic Sensors
Hand tracking is an important aspect of human-computer interaction and has a
wide range of applications in extended reality devices. However, current hand
motion capture methods suffer from various limitations. For instance,
visual-based hand pose estimation is susceptible to self-occlusion and changes
in lighting conditions, while IMU-based tracking gloves experience significant
drift and are not resistant to external magnetic field interference. To address
these issues, we propose a novel and low-cost hand-tracking glove that utilizes
several MEMS-ultrasonic sensors attached to the fingers to measure the
distance matrix among the sensors. Our lightweight deep network then
reconstructs the hand pose from the distance matrix. Our experimental results
demonstrate that this approach is accurate, size-agnostic, and robust to
external interference. We also present the design rationale behind the sensor
selection, sensor configuration, circuit diagram, and model architecture.
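A minimal sketch of the regression stage, assuming a 16-sensor glove and a 63-parameter pose output (both illustrative dimensions): a small MLP maps the upper triangle of the pairwise distance matrix to hand joint parameters. The layer sizes are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

N_SENSORS, N_JOINT_PARAMS = 16, 21 * 3
n_pairs = N_SENSORS * (N_SENSORS - 1) // 2  # upper triangle of the matrix

class DistanceToPose(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pairs, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_JOINT_PARAMS),
        )

    def forward(self, dist_matrix):  # (batch, N_SENSORS, N_SENSORS)
        iu = torch.triu_indices(N_SENSORS, N_SENSORS, offset=1)
        flat = dist_matrix[:, iu[0], iu[1]]  # distances are symmetric
        return self.net(flat)

model = DistanceToPose()
pose = model(torch.rand(4, N_SENSORS, N_SENSORS))  # (4, 63)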
Gradient-Based Dovetail Joint Shape Optimization for Stiffness
It is common to manufacture an object by decomposing it into parts that can
be assembled. This decomposition is often required by the size limits of the
machine, the complex structure of the shape, and similar constraints. To make it possible to easily
assemble the final object, it is often desirable to design geometry that
enables robust connections between the subcomponents. In this project, we study
the task of dovetail-joint shape optimization for stiffness using
gradient-based optimization. This optimization requires a differentiable
simulator that is capable of modeling the contact between the two parts of a
joint, making it possible to reason about the gradient of the stiffness with
respect to shape parameters. Our simulation approach uses a penalty method that
alternates between optimizing each side of the joint, using the adjoint method
to compute gradients. We test our method by optimizing the joint shapes in
three different joint shape spaces, and evaluate optimized joint shapes in both
simulation and real-world tests. The experiments show that optimized joint
shapes achieve higher stiffness, both synthetically and in real-world tests.
Comment: ACM SCF 2023: Proceedings of the 8th Annual ACM Symposium on Computational Fabrication
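The penalty method mentioned above can be illustrated with the usual quadratic penetration energy; the gap function, stiffness value, and coupling to the FEM simulator and adjoint solve are abstracted away in this sketch.

import numpy as np

def penalty_energy(gaps, k=1e4):
    """Quadratic penalty on interpenetration; gap < 0 means the joint's
    two sides overlap by that distance."""
    penetration = np.minimum(gaps, 0.0)
    return 0.5 * k * np.sum(penetration ** 2)

def penalty_force(gaps, k=1e4):
    """Restoring force (negative gradient of the energy w.r.t. the gaps),
    the kind of quantity an adjoint solve propagates back to shape parameters."""
    return -k * np.minimum(gaps, 0.0)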
Factored Time-Lapse Video
We describe a method for converting time-lapse photography captured with outdoor cameras into Factored Time-Lapse Video (FTLV): a video in which time appears to move faster (i.e., lapsing) and where the data at each pixel has been factored into shadow, illumination, and reflectance components. The factorization allows a user to easily relight the scene, recover a portion of the scene geometry (normals), and perform advanced image editing operations. Our method is easy to implement, robust, and provides a compact representation with good reconstruction characteristics. We show results using several publicly available time-lapse sequences.
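A crude per-pixel heuristic conveys the flavor of the factorization, modeling each pixel's time series with a fully-lit level, an in-shadow level, and a per-frame shadow factor. This stand-in uses simple temporal statistics and is not the paper's actual method.

import numpy as np

def factor_timelapse(video):  # video: (T, H, W) grayscale, values in [0, 1]
    lit = np.percentile(video, 95, axis=0)       # per-pixel fully-lit level
    shadowed = np.percentile(video, 5, axis=0)   # per-pixel in-shadow level
    # Per-frame, per-pixel fraction of the way from shadowed to lit
    # (1 = fully lit, 0 = fully in shadow).
    lit_fraction = np.clip(
        (video - shadowed) / np.maximum(lit - shadowed, 1e-6), 0.0, 1.0)
    shadow = 1.0 - lit_fraction                  # shadow component
    reflectance = lit / max(float(lit.max()), 1e-6)  # relative albedo estimate
    illumination = video.mean(axis=(1, 2))       # global brightness over time
    return reflectance, illumination, shadow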
A Statistical Model for Synthesis of Detailed Facial Geometry
Detailed surface geometry contributes greatly to the visual realism of 3D face models. However, acquiring high-resolution face geometry is often tedious and expensive. Consequently, most face models used in games, virtual reality, or computer vision look unrealistically smooth. In this paper, we introduce a new statistical technique for the analysis and synthesis of small three-dimensional facial features, such as wrinkles and pores. We acquire high-resolution face geometry for people across a wide range of ages, genders, and races. For each scan, we separate the skin surface details from a smooth base mesh using displaced subdivision surfaces. Then, we analyze the resulting displacement maps using the texture analysis/synthesis framework of Heeger and Bergen, adapted to capture statistics that vary spatially across a face. Finally, we use the extracted statistics to synthesize plausible detail on face meshes of arbitrary subjects. We demonstrate the effectiveness of this method in several applications, including analysis of facial texture in subjects with different ages and genders, interpolation between high-resolution face scans, adding detail to low-resolution face scans, and adjusting the apparent age of faces. In all cases, we are able to reproduce fine geometric details consistent with those observed in high-resolution scans.
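The detail-separation step can be sketched as signed displacement along the base mesh's vertex normals, which turns fine geometry into a scalar map that texture synthesis can operate on. Vertex correspondence between the scan and the base mesh is assumed given, and the helper names are illustrative.

import numpy as np

def displacement_map(scan_vertices, base_vertices, base_normals):
    """Signed per-vertex displacement of the detailed scan from the base."""
    offset = scan_vertices - base_vertices
    return np.einsum('ij,ij->i', offset, base_normals)  # project onto normals

def apply_detail(base_vertices, base_normals, displacements, scale=1.0):
    """Re-apply (possibly synthesized or age-adjusted) detail to a base mesh."""
    return base_vertices + scale * displacements[:, None] * base_normals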